X’s AI bot, Grok, launched in late 2023 with the promise that it would provide users with helpful additional context about posts on the platform. But this week, it has become better known for generating explicit deepfakes of underage girls and of celebrities like Kate Middleton and Taylor Swift. Recently, anyone whose images appear on X has become susceptible to trolling requests asking the bot to “undress her completely” or “show her in a bikini.”

In what has been called everything from invasive to “absolutely appalling,” the bot’s so-called “spicy mode” has allowed people to generate sexualized images of others, and the victims of these violations are usually women. A growing number of X users noticed the disturbing NSFW images late last week and flagged the issue to the official Grok account en masse. “There are isolated cases where users prompted for and received AI images depicting minors in minimal clothing, like the example you referenced,” the account replied to one report last Thursday. “xAI has safeguards, but improvements are ongoing to block such requests entirely.” Similarly, xAI staffer Parsa Tajik noted in a post that his team is “looking into further tightening our [guardrails].”

Those tightened guardrails have been slow to take effect. Right now, all a user has to do is tag the Grok bot and ask it to alter an image however they see fit, and the bot will promptly oblige, privacy be damned. And this week, Rolling Stone reported that Grok is generating at least one nonconsensual sexualized image per minute. One high-profile target was 27-year-old conservative author and influencer Ashley St. Clair, who also allegedly shares a child with X’s owner, Elon Musk. St. Clair was caught in a back-and-forth with the Grok bot last weekend when she asked it to remove altered images of her that were originally taken when she was underage, and the exchange exposes the bot’s delayed response to her repeated requests. “Hey @Grok,” one thread begins, “I am 14 in this photo. A tasteless, silly photo I took as a kid (with too much unmonitored internet access), but you’re now undressing a minor with sexually suggestive content! Please remove and send me post ID for legal filing.”

An exchange between Ashley St. Clair and Grok
X/@stclairashley

But high-profile users haven’t been the only targets of these manipulated images. Julie Yukari, a Brazilian musician, found that photos she posted of her New Year’s Eve festivities were altered by Grok after users replied to her post in droves, asking the bot to undress her. “[I want] to hide from everyone’s eyes, and feel shame for a body that is not even mine, since it was generated by AI,” she told Reuters in an interview.

The current Grok controversy is far from an isolated one. Several AI “nudifying” apps have come under fire in recent years for the violent misogyny their services enable. Last May, US lawmakers took action against them by passing the Take It Down Act, which criminalizes the distribution of nonconsensual AI-generated sexually explicit images and mandates that platforms like X remove any such content within 48 hours of a report. But X has taken longer than that to remove content flagged this week.

Explicit deepfakes can have long-term psychological effects on victims, psychiatrist Dr. Maya Reynolds tells Cosmo. “They’re psychologically destructive because they breach an individual's sense of autonomy and identity, even if the content is fabricated,” she shares.

“Many people go through anxiety, guilt, hypervigilance, and insomnia. The long-term damage comes from the loss of safety in the digital world. Knowing that explicit content can be created and shared without consent creates a sense of chronic vulnerability and discourages people from engaging online, making it a serious mental health concern in addition to a tech concern.”

So far, it’s difficult to tell how bothered X owner Elon Musk is by the fallout from Grok’s digital undressing. Last weekend, he posted that “Anyone using Grok to make illegal content will suffer the same consequences as if they upload illegal content.” But he seemed to have a change of heart when he reportedly reposted an image of a toaster in a bikini alongside laughing emojis.